

Part Recognition


MVIP -- A Dataset and Methods for Application Oriented Multi-View and Multi-Modal Industrial Part Recognition

Koch, Paul, Schlüter, Marian, Krüger, Jörg

arXiv.org Artificial Intelligence

We present MVIP, a novel dataset for multi-modal and multi-view application-oriented industrial part recognition. Here we are the first to combine a calibrated RGBD multi-view dataset with additional object context such as physical properties, natural language, and super-classes. The current portfolio of available datasets offers a wide range of representations to design and benchmark related methods. In contrast to existing classification challenges, industrial recognition applications offer controlled multi-modal environments, but at the same time pose different problems than traditional 2D/3D classification challenges. Frequently, industrial applications must deal with small or varying amounts of training data, visually similar parts, and varying object sizes, while requiring robust near-100% top-5 accuracy under cost and time constraints. Current methods tackle such challenges individually, but direct adoption of these methods within industrial applications is complex and requires further research. Our main goal with MVIP is to study and push the transferability of various state-of-the-art methods within related downstream tasks towards an efficient deployment of industrial classifiers. Additionally, with MVIP we intend to push research regarding several modality-fusion topics, (automated) synthetic data generation, and complex data sampling -- combined in a single application-oriented benchmark.




Toward Estimating Task Execution Confidence for Robotic Bin-Picking Applications

Kaipa, Krishnanand N. (University of Maryland, College Park) | Kankanhalli-Nagendra, Akshaya S. (University of Maryland, College Park) | Gupta, Satyandra K. (University of Maryland, College Park)

AAAI Conferences

We present an approach geared toward estimating task execution confidence for robotic bin-picking applications. This requires estimating execution confidence for all constituent subtasks, including part recognition and pose estimation, singulation, transport, and fine positioning. This paper is focused on computing the associated confidence parameters for the part recognition and pose estimation subtask. In particular, our approach allows a robot to evaluate how good the part recognition and pose estimation is, based on a confidence measure, and thereby determine whether to proceed with the task execution (part singulation) or to request help from a human in order to resolve the associated failure. The value of a mean-square distance metric at the local minimum where the part-matching solution is found is used as a surrogate for the confidence parameter. Experiments with a Baxter robot are used to illustrate our approach.
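
The confidence idea in the abstract above can be sketched briefly: once part matching converges to a local minimum, the residual mean-square distance between matched model and scene points serves as a surrogate confidence value, and a threshold decides whether the robot proceeds or asks for help. This is a minimal illustrative sketch, not the paper's implementation; the function name, the assumption of given point correspondences, and the threshold value are all hypothetical.

```python
import numpy as np

def registration_confidence(model_pts, scene_pts, threshold=1e-3):
    """Surrogate confidence from the mean-square distance between
    corresponding model and scene points at the matching local minimum.

    model_pts, scene_pts: (N, 3) arrays of already-corresponding points.
    Returns (mse, proceed): the residual and whether to continue the task.
    """
    # Mean of squared Euclidean distances between corresponding points.
    mse = float(np.mean(np.sum((model_pts - scene_pts) ** 2, axis=1)))
    proceed = mse <= threshold  # low residual -> confident match
    return mse, proceed

# Usage: a perfectly aligned match yields zero residual, so proceed.
pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
mse, ok = registration_confidence(pts, pts.copy())
```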